    Trapped atoms in cavity QED: coupling quantized light and matter

    On the occasion of the hundredth anniversary of Albert Einstein's annus mirabilis, we reflect on the development and current state of research in cavity quantum electrodynamics in the optical domain. Cavity QED is a field that undeniably traces its origins to Einstein's seminal work on the statistical theory of light and the nature of its quantized interaction with matter. In this paper, we emphasize the development of techniques for the confinement of atoms strongly coupled to high-finesse resonators and the experiments that these techniques enable.

    Cavity QED with Single Atoms and Photons

    Recent experimental advances in the field of cavity quantum electrodynamics (QED) have opened new possibilities for control of atom-photon interactions. A laser with "one and the same atom" demonstrates the theory of laser operation pressed to its conceptual limit. The generation of single photons on demand and the realization of cavity QED with well-defined atom numbers N = 0, 1, 2, ... both represent important steps toward realizing diverse protocols in quantum information science. Coherent manipulation of the atomic state via Raman transitions provides a new tool in cavity QED for in situ monitoring and control of the atom-cavity system. All of these achievements share a common point of departure: the regime of strong coupling. It is thus interesting to consider briefly the history of the strong coupling criterion in cavity QED and to trace out the path that research has taken in the pursuit of this goal.
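    For reference, the strong coupling regime invoked in this abstract is conventionally defined by the single-atom coupling rate dominating both dissipation rates, with the associated critical photon and atom numbers falling below unity. The relations below are standard cavity QED definitions assumed as background, not statements taken from the abstract, and the exact numerical prefactors vary by convention.

```latex
% Standard strong-coupling criterion in cavity QED (assumed background):
%   g      : atom-cavity coupling rate
%   \kappa : cavity field decay rate
%   \gamma : atomic dipole decay rate
g \gg (\kappa,\ \gamma), \qquad
n_0 \sim \frac{\gamma^{2}}{g^{2}} \ll 1, \qquad
N_0 \sim \frac{\kappa\,\gamma}{g^{2}} \ll 1
```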

    Stable marriage with general preferences

    We propose a generalization of the classical stable marriage problem. In our model, the preferences on one side of the partition are given in terms of arbitrary binary relations, which need not be transitive or even acyclic. This generalization is practically well-motivated and, as we show, encompasses the well-studied hard variant of stable marriage in which preferences are allowed to have ties and to be incomplete. As a result, we prove that deciding the existence of a stable matching in our model is NP-complete. Complementing this negative result, we present a polynomial-time algorithm for the above decision problem in a significant class of instances where the preferences are asymmetric. We also present a linear programming formulation whose feasibility fully characterizes the existence of stable matchings in this special case. Finally, we use our model to study a long-standing open problem regarding the existence of cyclic 3D stable matchings. In particular, we prove that the problem of deciding whether a fixed 2D perfect matching can be extended to a 3D stable matching is NP-complete, thereby showing that a natural attempt to resolve the existence (or not) of 3D stable matchings is bound to fail. Comment: This is an extended version of a paper to appear at the 7th International Symposium on Algorithmic Game Theory (SAGT 2014).
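    To make the model concrete, the sketch below checks whether a given matching admits a blocking pair when one side's preferences are arbitrary (possibly intransitive or cyclic) binary relations. It is an illustration only: the agent names, the pair-set encoding of the relations, and the exact blocking-pair notion are assumptions, not definitions taken from the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's formalism) of a
# stability check when one side's preferences are arbitrary binary relations.
from itertools import product

def is_stable(matching, prefers_left, prefers_right):
    """matching: dict mapping each left agent to its current partner.

    prefers_left[a] is a set of ordered pairs (x, y) meaning agent a strictly
    prefers partner x to partner y; it may be intransitive or even cyclic.
    prefers_right[b] is an ordinary strict ranking encoded the same way.
    A pair (a, b) blocks the matching if both strictly prefer each other to
    their assigned partners; the matching is stable iff no pair blocks it.
    """
    partner_of_right = {b: a for a, b in matching.items()}
    for a, b in product(matching.keys(), partner_of_right.keys()):
        if matching[a] == b:
            continue  # already matched to each other
        a_wants_b = (b, matching[a]) in prefers_left[a]
        b_wants_a = (a, partner_of_right[b]) in prefers_right[b]
        if a_wants_b and b_wants_a:
            return False  # (a, b) is a blocking pair
    return True

# Tiny example: left agent "m1" has a cyclic preference relation.
matching = {"m1": "w1", "m2": "w2"}
prefers_left = {
    "m1": {("w2", "w1"), ("w1", "w3"), ("w3", "w2")},  # cyclic, still valid input
    "m2": {("w2", "w1")},
}
prefers_right = {"w1": {("m2", "m1")}, "w2": {("m1", "m2")}}
print(is_stable(matching, prefers_left, prefers_right))  # False: ("m1", "w2") blocks
```

    Deciding whether any stable matching exists in this model is what the paper shows to be NP-complete; the check above only verifies a candidate matching.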

    Observation of the Vacuum-Rabi Spectrum for One Trapped Atom

    The transmission spectrum for one atom strongly coupled to the field of a high-finesse optical resonator is observed to exhibit a clearly resolved vacuum-Rabi splitting characteristic of the normal modes in the eigenvalue spectrum of the atom-cavity system. A new Raman scheme for cooling atomic motion along the cavity axis enables a complete spectrum to be recorded for an individual atom trapped within the cavity mode, in contrast to all previous measurements in cavity QED, which have required averaging over many atoms. Comment: 5 pages with 4 figures.
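    For context, the splitting referred to above is the standard Jaynes-Cummings result for a resonant atom-cavity system: the first excited doublet is shifted by plus or minus the coupling rate, so the transmission spectrum shows two peaks separated by twice that rate. The expression below is assumed textbook background, not a formula quoted from the paper.

```latex
% Jaynes-Cummings first excited manifold at zero atom-cavity detuning
% (assumed background; \omega_0: common resonance frequency, g: coupling rate)
E_{\pm} = \hbar\,\omega_0 \pm \hbar g
\quad\Longrightarrow\quad
\text{two vacuum-Rabi peaks separated by } 2g
```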

    Proton Beam Energy Characterization

    Introduction
    The Los Alamos Isotope Production Facility (IPF) is actively engaged in the development of isotope production technologies that can utilize its 100 MeV proton beam. Characterization of the proton beam energy and current is vital for optimizing isotope production and accurately conducting research at the IPF.

    Motivation
    In order to monitor beam intensity during research irradiations, aluminum foils are interspersed in experimental stacks. A theoretical yield of 22Na from 27Al(p,x)22Na reactions is calculated using the MCNP6 (Monte Carlo N-Particle), TRIM (Transport of Ions in Matter), and Andersen & Ziegler (A&Z) [1] computational models. For some recent experiments, experimentally measured activities did not match computational predictions. This discrepancy motivated further experimental investigations, including a direct time-of-flight measurement of the proton beam energy upstream of the target stack. The Isotope Production Program now tracks the beam energy and current by a complement of experimental and computational methods described below.

    Material and Methods
    A stacked-foil activation technique utilizing aluminum monitor foils [2], in conjunction with a direct time-of-flight measurement, helps define the current and energy of the proton beam. Theoretical yields of 22Na activity generated in the Al monitor foils are compared with experimental measurements. Additionally, the MCNP, TRIM, and A&Z computational simulations are compared with one another and with experimental data.

    Experimental Approach
    Thin foils (0.254 mm) of high-purity aluminum are encapsulated in Kapton tape and stacked with Tb foils in between aluminum degraders. Following irradiation, the Al foils are assayed using γ-spectroscopy on calibrated HPGe detectors in the Chemistry Division countroom at LANL. We use the well-characterized 27Al(p,x)22Na energy-dependent production cross section [3] to calculate a predicted yield of 22Na in each foil. Details of the experimental activity determination and associated uncertainties have been addressed previously [4]. The nominally stated beam parameters are 100 MeV and 100–120 nA for the foil stack irradiation experiments. Time-of-flight measurements performed in January 2014 revealed a beam energy of 99.1 ± 0.5 MeV.

    Computational Simulations
    Andersen & Ziegler (A&Z) is a deterministic method and the simplest of the three computational methods considered. While the mean energy degradation can be calculated using the A&Z formalism, the beam current attenuation cannot; consequently, A&Z also cannot account for the broadening in beam energy that a stochastic method affords. Additionally, A&Z does not account for nuclear recoil or for contributions from secondary interactions. TRIM uses a stochastic method to calculate the stopping range of incident particles, applying the Bethe-Bloch formalism. TRIM, like A&Z, does not include contributions from nuclear recoil or from secondary interactions. Computationally, TRIM is a very expensive code to run. TRIM is able to calculate a broadening in the energy of the beam; however, its beam attenuation predictions are much less reliable: TRIM determines the overall beam attenuation in the whole stack to be less than one percent, whereas 7–10% is expected. MCNP6 is arguably the most sophisticated approach to modeling the physics of the experiment. It also uses a stochastic procedure for calculation, adopting the Cascade-Exciton Model (CEM03) to track particles.
    The physics card is enabled in the MCNP input to track light-ion recoils. Contributions from neutron and proton secondary-particle interactions are included, although their contribution is minimal. For both MCNP and TRIM, the proton beam is simulated as a pencil beam. To find the current, an F4 volumetric tally of proton flux from the MCNP simulation is matched to the experimental current for the first foil in the stack. Subsequent foil currents are calculated relative to the first foil based on MCNP predictions for beam attenuation. The equation used for calculating the current from the experimental activity is [5]:

        I = (λ · N · A) / (N_A · ρ · Δx · σ · (1 − e^(−λ·t_irr)))

    where: σ is the cross section for the process [mbarn]; A is the atomic mass of the target [amu]; N is the number of product nuclei present at End-of-Bombardment; I is the average beam current [μA]; ρ is the density of the target material [g/cc]; Δx is the target thickness [cm]; λ is the decay constant [s⁻¹]; t_irr is the irradiation time [s]; and N_A is Avogadro's number (with unit-conversion factors as appropriate for the units listed).
    For each foil in the experimental stack, we also have a statistically driven broadening of the incident energy. The beam energy is modeled as a Gaussian distribution, with the tallies for each energy bin determining the parameters of the fit. TABLE 1 and FIG. 3 summarize the mean energy and the standard deviation of the energy for each aluminum monitor foil. To address the energy distribution, we calculate an effective, or weighted, cross section. It is especially important to account for energy broadening in regions where the associated excitation function varies rapidly; the excitation function varies strongly in the range from 30–65 MeV, the energy region covered by the last three foils in the stack. Cross-section weighting also accounts for the mean energy variation within each foil. The excitation function overlays the Gaussian-shaped flux distribution, giving rise to incrementally weighted values of the cross section determined by the flux tally of the corresponding energy bin. With the effective cross section and the current at each of the foils, it is straightforward to calculate the number of 22Na atoms created and the activity of each foil using the previously stated equation.

    Results and Conclusion
    The general trend in the amount of activity produced follows the shape of the excitation function for the 27Al(p,x)22Na reaction. Small shifts in the incident energy upstream trickle down to produce much more pronounced shifts in the energy range of foils toward the back of the foil stack. The characteristic "rolling over" of the activity seen in the experimental foils indicates that the 6th foil must be in the energy region below 45 MeV, where the peak of the excitation function occurs. Conservatively, computational simulations are able to accurately determine the proton beam's energy over the range from 100 MeV down to 50 MeV. As the beam degrades below 50 MeV, computational simulations diverge from experimentally observed energies by over-predicting the energy. This observation has been noted in past studies [6,7] that compare the stacked-foil technique with stopping-power-based calculations. A complement of experimental and computational predictions allows for energy determinations at several points within target stacks. While this study focuses on an Al-Tb foil stack, the analysis of a similar Al-Th foil stack resulted in the same conclusions.
    Although we do not have a concurrent time-of-flight energy measurement from the time of the foil stack experiments, it is reasonable to assume that the beam energy during those experiments was also lower than the assumed 100 MeV; the computational simulations developed in this work firmly support this assumption. The various computational models are able to predict, with good agreement, the energy as a function of depth for complex foil stack geometries. Their predictions diverge as the beam energy distribution broadens and statistical uncertainties propagate. A careful inspection of the codes reveals that these discrepancies likely originate from minute differences between the cross sections and stopping-power tables used by MCNP and by TRIM/A&Z, respectively.
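    The two calculations described above, the flux-weighted effective cross section for a Gaussian-broadened beam and the average current inferred from a foil's measured activity, can be sketched numerically. The snippet below is illustrative only: the energy grid, the stand-in excitation function, and the foil activity are placeholders, not data from this work.

```python
# Illustrative sketch (placeholder numbers, not data from this study) of the
# flux-weighted effective cross section and of the average beam current
# inferred from the number of 22Na product nuclei in an Al monitor foil.
import numpy as np

N_A = 6.02214076e23          # Avogadro's number [1/mol]
Q_E = 1.602176634e-19        # proton charge [C]

def effective_cross_section(energies_mev, flux_tally, sigma_of_e):
    """Weight the cross section in each energy bin by the (e.g. MCNP F4)
    flux tally of that bin, then sum."""
    weights = flux_tally / flux_tally.sum()
    return np.sum(weights * sigma_of_e(energies_mev))

def beam_current_uA(n_atoms_eob, sigma_mb, rho_g_cc, dx_cm, a_amu,
                    lam_per_s, t_irr_s):
    """Average proton current [uA] from the product nuclei at End-of-Bombardment,
    inverting I = lam*N*A / (N_A*rho*dx*sigma*(1 - e^(-lam*t_irr)))."""
    sigma_cm2 = sigma_mb * 1e-27                      # mbarn -> cm^2
    n_targets_per_cm2 = rho_g_cc * dx_cm * N_A / a_amu
    protons_per_s = (lam_per_s * n_atoms_eob /
                     (n_targets_per_cm2 * sigma_cm2 *
                      (1.0 - np.exp(-lam_per_s * t_irr_s))))
    return protons_per_s * Q_E * 1e6                  # protons/s -> uA

# Placeholder example: Gaussian beam at 55 MeV (sigma_E = 2 MeV) on a 0.254 mm Al foil.
energies = np.linspace(45.0, 65.0, 81)
flux = np.exp(-0.5 * ((energies - 55.0) / 2.0) ** 2)
fake_sigma_mb = lambda e: 20.0 + 0.2 * (e - 45.0)     # stand-in excitation function
sigma_eff = effective_cross_section(energies, flux, fake_sigma_mb)

lam_22na = np.log(2) / (2.6018 * 365.25 * 24 * 3600)  # 22Na decay constant [1/s]
n_eob = 1.0e9                                         # placeholder 22Na atoms at EOB
print(sigma_eff,
      beam_current_uA(n_eob, sigma_eff, 2.70, 0.0254, 26.98, lam_22na, 3600.0))
```

    Matching the first foil's tally to the measured current and scaling later foils by the MCNP-predicted attenuation, as described above, follows the same pattern.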

    A pedagogic appraisal of the Priority Heuristic

    We have explored how science and mathematics teachers made decisions when confronted with a dilemma in which a fictitious young woman, Deborah, may choose to have an operation that might address a painful spinal condition. We sought to explore the extent to which psychological heuristic models, in particular the Priority Heuristic, might successfully describe the decision-making process of these teachers, and how an analysis of the role of personal and emotional factors in shaping the decision-making process might inform pedagogical design. A novel aspect of this study is that the setting in which the decision-making process is examined contrasts sharply with those used in psychological experiments. We found that, to some extent, even in this contrasting setting, the Priority Heuristic could describe these teachers' decision-making. Further analysis of the transcripts yielded some insights into limitations on scope as well as the richness and complexity in how personal factors were brought to bear. We see these limitations as design opportunities for educational intervention.
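    For readers unfamiliar with the model referenced above, the canonical Priority Heuristic from the judgment-and-decision-making literature examines reasons in a fixed order and stops at the first decisive one. The sketch below implements that canonical rule for simple two-outcome monetary gambles in the gain domain, with simplified aspiration levels; it is background material only, not the instrument or analysis used in this study, whose Deborah scenario is far richer than a monetary gamble.

```python
# Sketch of the canonical Priority Heuristic for two simple gambles in the
# gain domain (background illustration; aspiration levels simplified).

def priority_heuristic(gamble_a, gamble_b):
    """Return 'A', 'B', or 'guess'. Each gamble is a list of
    (outcome, probability) pairs with non-negative outcomes.

    Reasons are examined in a fixed order -- minimum gain, probability of the
    minimum gain, maximum gain -- and search stops at the first reason whose
    difference exceeds its aspiration level (here 1/10 of the maximum gain for
    outcomes, 1/10 of the probability scale for probabilities).
    """
    def min_gain(g):  return min(x for x, _ in g)
    def max_gain(g):  return max(x for x, _ in g)
    def p_min(g):     return sum(p for x, p in g if x == min_gain(g))

    aspiration = 0.1 * max(max_gain(gamble_a), max_gain(gamble_b))

    # Reason 1: minimum gains (higher is better).
    if abs(min_gain(gamble_a) - min_gain(gamble_b)) >= aspiration:
        return "A" if min_gain(gamble_a) > min_gain(gamble_b) else "B"
    # Reason 2: probabilities of the minimum gains (lower is better).
    if abs(p_min(gamble_a) - p_min(gamble_b)) >= 0.1:
        return "A" if p_min(gamble_a) < p_min(gamble_b) else "B"
    # Reason 3: maximum gains (higher is better).
    if max_gain(gamble_a) != max_gain(gamble_b):
        return "A" if max_gain(gamble_a) > max_gain(gamble_b) else "B"
    return "guess"

# Example: a sure 2500 versus an 80% chance of 4000 (else nothing).
print(priority_heuristic([(2500, 1.0)], [(4000, 0.8), (0, 0.2)]))  # 'A'
```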